
Conversation

@alirana01
Collaborator

Description

Adds a ping test for the mirrors. We first send a HEAD request, and if that fails we fall back to a normal GET to verify latency and connectivity.
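The HEAD-then-GET fallback described above can be sketched transport-agnostically. In this sketch `head` and `get` stand in for the real HTTP calls (the PR uses reqwest); each returns true on a successful response, so the control flow is testable without a network. All names are illustrative, not the PR's exact code.

```rust
use std::time::Instant;

/// HEAD-then-GET probe skeleton. `head` and `get` stand in for real HTTP
/// calls (the PR uses reqwest); each returns true on a 2xx response.
/// Returns the elapsed milliseconds, or None if both attempts fail.
fn probe_latency(head: impl Fn() -> bool, get: impl Fn() -> bool) -> Option<u32> {
    let start = Instant::now();
    // Try the cheap HEAD first; some mirrors reject it, so fall back to GET.
    let reachable = head() || get();
    // Clamp the reading into the u32 score space used elsewhere in the PR.
    reachable.then(|| start.elapsed().as_millis().min(u32::MAX as u128) as u32)
}
```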

Testing

Tested the installation via the CLI and the GUI, using both the simple and wizard-based flows.


Checklist

Before submitting a Pull Request, please ensure the following:

  • All CI checks (GH Actions) pass.
  • Documentation is updated as needed.
  • Tests are updated or added as necessary.
  • Code is well-commented, especially in complex areas.
  • Git history is clean — commits are squashed to the minimum necessary.

@alirana01 alirana01 self-assigned this Nov 12, 2025
@gemini-code-assist

Summary of Changes

Hello @alirana01, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant enhancement to the mirror selection process by integrating latency-based ping checks. The primary goal is to improve the reliability and speed of downloads and installations by ensuring the application connects to the most performant mirrors. This change provides a more intelligent and user-friendly way to manage mirror preferences across both command-line and graphical interfaces.

Highlights

  • Mirror Latency Checks: Implemented a new feature to perform latency checks (ping) on available mirrors (IDF, tools, and PyPI) to determine their responsiveness.
  • Enhanced Mirror Selection: The mirror selection logic in both the CLI and GUI has been updated to utilize these latency scores, automatically prioritizing and displaying the fastest available mirrors.
  • UI/UX Improvements: The GUI now visually presents the latency (ping time) for each mirror and indicates unreachable mirrors with a 'timeout' status, improving user experience during mirror selection.
  • Dependency Updates: Added new Rust dependencies futures and url, and updated the reqwest library to include stream and json features, supporting the new network operations.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a latency check for mirrors, which is a great improvement for user experience. The implementation is solid, but there are several areas with significant code duplication in both the Rust backend and the Vue frontend. I've provided suggestions to refactor this duplicated logic into helper functions to improve maintainability. Additionally, I've identified a potential crash in a GUI command due to an unhandled error and a performance improvement opportunity by parallelizing mirror checks.


Comment on lines 519 to 554

/// Compute the latency map for the tools mirror
pub async fn get_tools_mirror_latency_map(&self) -> Result<HashMap<String, u32>> {
    let available_mirrors = crate::get_idf_tools_mirrors_list()
        .to_vec()
        .iter()
        .map(|s| s.to_string())
        .collect::<Vec<String>>();
    let mirror_latency_map =
        crate::utils::calculate_mirror_latency_map(&available_mirrors).await;
    Ok(mirror_latency_map)
}

/// Compute the latency map for the IDF mirror
pub async fn get_idf_mirror_latency_map(&self) -> Result<HashMap<String, u32>> {
    let available_mirrors = crate::get_idf_mirrors_list()
        .to_vec()
        .iter()
        .map(|s| s.to_string())
        .collect::<Vec<String>>();
    let mirror_latency_map =
        crate::utils::calculate_mirror_latency_map(&available_mirrors).await;
    Ok(mirror_latency_map)
}

/// Compute the latency map for the PyPI mirror
pub async fn get_pypi_mirror_latency_map(&self) -> Result<HashMap<String, u32>> {
    let available_mirrors = crate::get_pypi_mirrors_list()
        .to_vec()
        .iter()
        .map(|s| s.to_string())
        .collect::<Vec<String>>();
    let mirror_latency_map =
        crate::utils::calculate_mirror_latency_map(&available_mirrors).await;
    Ok(mirror_latency_map)
}
Collaborator Author


@Hahihula I think I made these functions redundant by adding them here, or I may have misunderstood something.

Collaborator


So why did you add them?
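The three near-identical functions quoted above could collapse into one parameterized helper. A minimal sketch, with the mirror list and the probe supplied as parameters; in the PR the probe would be the async latency check, but here it is a plain closure so the deduplicated shape is testable offline, and all names are illustrative:

```rust
use std::collections::HashMap;

/// One helper instead of three copies: the mirror list and the probe are
/// parameters. Mirrors whose probe fails are simply omitted from the map
/// (a sketch choice; the PR currently uses a u32::MAX sentinel instead).
fn latency_map_for(
    mirrors: &[&str],
    probe: impl Fn(&str) -> Option<u32>,
) -> HashMap<String, u32> {
    mirrors
        .iter()
        .filter_map(|url| probe(url).map(|ms| (url.to_string(), ms)))
        .collect()
}
```

Each per-mirror-type wrapper then becomes a one-line call passing its own mirror list.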

@alirana01 alirana01 requested a review from Hahihula November 24, 2025 14:59

@Hahihula Hahihula left a comment


You also use u32::MAX for unreachable mirrors on the Rust side and 0 on the JavaScript side. Avoid magic numbers where possible; an Option<> or an enum would be a better fit here.
There is also a lot of faulty code, as well as code that only works in happy-path scenarios. Avoid using unwrap unless you are 100% sure it is safe to do so.
A lot of the logic is anti-idiomatic.
Also, please turn off your IDE's automatic formatting features and avoid copy-pasting from LLMs, which reformats all of the code. Unnecessary changes in code formatting mask the history of changes in git. Please do not reformat code you otherwise did not touch.
You also completely ignored some of my comments from last time, for example the error you have here with stripping the port number from the URL. After the fix I also expect a test that covers this case.
The UI-side logic can also be significantly simplified. There is also the readability issue of converting u32::MAX to 0 and using positive and negative infinity alongside both null and undefined. But the idea of bootstrapping the store with values needed later after the app starts is good, and we should use it more broadly, for example for fetching OS params in one place.
I also think the functions mirror_entries_to_display and url_from_display_line really shouldn't exist at all.
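The sentinel concern above can be addressed by putting reachability into the type. A minimal sketch, with illustrative names that are not from the PR:

```rust
/// Reachability expressed in the type instead of sentinel values
/// (u32::MAX on the Rust side, 0 on the JavaScript side).
#[derive(Debug, Clone, Copy, PartialEq)]
enum MirrorPing {
    Millis(u32),
    Unreachable,
}

impl MirrorPing {
    /// What the CLI/GUI would render next to the mirror URL.
    fn display(&self) -> String {
        match self {
            MirrorPing::Millis(ms) => format!("({ms} ms)"),
            MirrorPing::Unreachable => "(timeout)".to_string(),
        }
    }
}

/// Convert a probe result into the typed value, with no magic numbers.
fn from_probe(result: Option<u32>) -> MirrorPing {
    result.map_or(MirrorPing::Unreachable, MirrorPing::Millis)
}
```

Serialized across the GUI boundary, this would become an explicit tagged value rather than a number the frontend has to reinterpret.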

Ok(resp) if resp.status().is_success() => {
    return Some(start.elapsed().as_millis().min(u32::MAX as u128) as u32);
}
_ => {
Collaborator


You will need to return Err here so that the fallback at the calculate_mirror_latency_map level actually works; right now you still have the fallback here.

}
}
/// Returns the base domain from a full URL.
fn get_base_url(url_str: &str) -> Option<String> {
Collaborator


This function still removes the port information, as I warned you in the previous iteration. You moved it but did not fix it.
Use something like:

if let Some(port) = url.port() {
    // Port is explicitly specified and non-default
    Some(format!("{}://{}:{}", scheme, host, port))
} else {
    // No explicit port (or it's the default for the scheme)
    Some(format!("{}://{}", scheme, host))
}
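For reference, here is a std-only sketch of a base-URL helper that keeps an explicit port, together with the kind of test the reviewer asks for. The hand-rolled parsing is illustrative only; since the PR already depends on the url crate, the real fix should use Url::port() as the snippet above suggests.

```rust
/// Returns "scheme://host" or "scheme://host:port" from a full URL.
/// Std-only sketch: keeps the authority (`host` or `host:port`) verbatim,
/// so an explicitly specified port survives. Ignores userinfo edge cases.
fn get_base_url(url_str: &str) -> Option<String> {
    let (scheme, rest) = url_str.split_once("://")?;
    // The authority ends at the first path/query/fragment delimiter.
    let end = rest
        .find(|c| c == '/' || c == '?' || c == '#')
        .unwrap_or(rest.len());
    let authority = &rest[..end];
    if scheme.is_empty() || authority.is_empty() {
        return None;
    }
    Some(format!("{scheme}://{authority}"))
}
```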

use url::Url;
use zstd::{decode_all, Decoder};

use crate::{
Collaborator


Please avoid needless reformatting/rearranging of imports, as it masks history.


for m in mirrors {
    let url = m.as_str();
    if !head_latency_failed {
Collaborator


Please fix these multiple head_latency_failed checks.

}

/// Turn `(url, latency)` tuples into display strings like `https://... (123 ms)` or `(... timeout)`.
pub fn mirror_entries_to_display(entries: &[MirrorEntry]) -> Vec<String> {
Collaborator


As this formatting is for the CLI part only, it should not be part of the common library shared between the CLI and the GUI.

    .unwrap()
    .0
    .clone();
let mirror = match best_mirror.is_empty() {
Collaborator


Here you select the mirror to be either the best or the default mirror, then emit it in the message and never use it!


let mirror = settings.idf_mirror.clone().unwrap_or_default();
let mut available_mirrors = idf_im_lib::get_idf_mirrors_list().to_vec();
let mirror = settings.idf_mirror.clone().unwrap_or_default();
Collaborator


Another unnecessary formatting change... please avoid.

}

/// Return URL -> score (lower is better). Unreachable mirrors get u32::MAX.
pub async fn calculate_mirror_latency_map(mirrors: &[String]) -> HashMap<String, u32> {
Collaborator


As written, this function can introduce waits of potentially multiple tens of seconds, because it checks the mirrors sequentially and each unreachable mirror must time out in turn. It also creates a new reqwest::Client for every mirror; creating a client is expensive (allocating connection pools, TLS configuration).
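The fix this comment asks for has two parts: build one client up front and share it, and run all probes concurrently so the worst case is a single timeout rather than the sum of every timeout. The async version would clone one reqwest::Client (clones share the connection pool) and use futures::future::join_all; the same shape is sketched below with std threads and an abstract Probe trait so it runs offline with no external crates. All names are illustrative.

```rust
use std::collections::HashMap;
use std::sync::Arc;
use std::thread;

/// Anything that can measure one mirror. In the PR this would wrap a single
/// shared reqwest::Client; it is abstract here so the sketch runs offline.
trait Probe: Send + Sync {
    fn latency_ms(&self, url: &str) -> Option<u32>;
}

/// Check all mirrors concurrently with ONE shared probe/client, so the
/// worst case is a single timeout, not the sum of every timeout.
fn latency_map(mirrors: &[String], probe: Arc<dyn Probe>) -> HashMap<String, Option<u32>> {
    let handles: Vec<_> = mirrors
        .iter()
        .cloned()
        .map(|url| {
            let probe = Arc::clone(&probe); // shared, not rebuilt per mirror
            thread::spawn(move || {
                let ms = probe.latency_ms(&url);
                (url, ms)
            })
        })
        .collect();
    handles
        .into_iter()
        .map(|h| h.join().expect("probe thread panicked"))
        .collect()
}
```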

const list = (urls || []).map((url) => ({
    value: url,
    label: url,
    ping: this.normalizePingValue(latencyMap ? latencyMap[url] : undefined),
Collaborator


As this is the only place normalizePingValue is used, the function exists only to convert a hardcoded undefined into a hardcoded null...

let default_mirror = rust_i18n::t!("gui.installation.default_mirror").to_string();
let mirror = settings.idf_mirror.as_deref().unwrap_or(&default_mirror);
let default_mirror_str = rust_i18n::t!("gui.installation.default_mirror").to_string();
let default_mirror = settings
Collaborator


This is not only the default mirror but potentially also the user-selected preferred mirror.

@alirana01 alirana01 requested a review from Hahihula November 28, 2025 09:50